Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems

Authors

Abstract

We develop stochastic first-order primal-dual algorithms to solve a class of convex-concave saddle-point problems. When the saddle function is strongly convex in the primal variable, we develop a restart scheme for this problem. When the gradient noises obey sub-Gaussian distributions, the oracle complexity of our restart scheme is strictly better than that of any existing method, even in the deterministic case. Furthermore, for each problem parameter of interest, whenever a lower bound exists, the oracle complexity of our restart scheme is either optimal or nearly optimal (up to a log factor). The subroutine used in the restart scheme is itself a new algorithm, developed for the general setting in which the saddle function is nonstrongly convex in the primal variable. This algorithm, which is based on a hybrid primal-dual framework, achieves state-of-the-art complexity and may be of independent interest.
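To make the primal-dual setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of a primal-dual hybrid gradient iteration applied to the toy saddle-point problem min_x max_y (1/2)||x||^2 + y^T(Ax − b) − (1/2)||y||^2, which is strongly convex in the primal variable and strongly concave in the dual variable. The matrix A, vector b, and step sizes below are arbitrary choices for illustration.

```python
# Toy primal-dual hybrid gradient iteration (a sketch, not the paper's method).
# Problem: min_x max_y  (1/2)||x||^2 + y^T(Ax - b) - (1/2)||y||^2.
# At the saddle point: y* = A x* - b and x* = -A^T y*.

def matvec(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def matvec_T(A, v):
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(len(A[0]))]

A = [[1.0, 0.0], [0.0, 2.0]]   # arbitrary example data
b = [1.0, 1.0]
tau = sigma = 0.25             # step sizes satisfying tau*sigma*||A||^2 < 1

x = [0.0, 0.0]
y = [0.0, 0.0]
x_bar = list(x)                # extrapolated primal iterate

for _ in range(500):
    # Dual ascent step: gradient step in y, then prox of (1/2)||y||^2.
    Ax = matvec(A, x_bar)
    y = [(yi + sigma * (axi - bi)) / (1.0 + sigma)
         for yi, axi, bi in zip(y, Ax, b)]
    # Primal descent step: gradient step in x, then prox of (1/2)||x||^2.
    ATy = matvec_T(A, y)
    x_new = [(xi - tau * gi) / (1.0 + tau) for xi, gi in zip(x, ATy)]
    # Extrapolation (the "bar" sequence used by the dual step).
    x_bar = [2.0 * xn - xo for xn, xo in zip(x_new, x)]
    x = x_new

print([round(v, 3) for v in x])  # converges to the saddle point [0.5, 0.4]
```

For this toy instance the saddle point can be checked by hand from the optimality conditions x* = −A^T y* and y* = A x* − b, giving x* = (0.5, 0.4) and y* = (−0.5, −0.2).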


Related Articles

An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems

This article proposes a new algorithm for solving a class of composite convex-concave saddle-point problems. The new algorithm is a special instance of the hybrid proximal extragradient framework in which a variant of Nesterov's accelerated method is used to approximately solve the prox subproblems. One of the advantages of the new method is that it works for any constant choice of proximal stepsize. More...


An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems

This paper describes an accelerated HPE-type method based on general Bregman distances for solving monotone saddle-point (SP) problems. The algorithm is a special instance of a non-Euclidean hybrid proximal extragradient framework introduced by Svaiter and Solodov [28] where the prox sub-inclusions are solved using an accelerated gradient method. It generalizes the accelerated HPE algorithm pre...


Saddle Point Seeking for Convex Optimization Problems

In this paper, we consider convex optimization problems with constraints. By combining the idea of a Lie bracket approximation for extremum seeking systems with saddle-point algorithms, we propose a feedback law that steers a single-integrator system to the set of saddle points of the Lagrangian associated with the convex optimization problem. We prove practical uniform asymptotic stability of the se...


A simple algorithm for a class of nonsmooth convex-concave saddle-point problems

This supplementary material includes numerical examples demonstrating the flexibility and potential of the algorithm PAPC developed in the paper. We show that PAPC does behave numerically as predicted by the theory, and can efficiently solve problems that cannot be solved by well-known state-of-the-art algorithms sharing the same efficiency estimate. Here, for illustration purposes, we compare ...


Preconditioned Douglas-Rachford Splitting Methods for Convex-concave Saddle-point Problems

We propose a preconditioned version of the Douglas-Rachford splitting method for solving convex-concave saddle-point problems associated with Fenchel-Rockafellar duality. It allows the use of approximate solvers for the linear subproblem arising in this context. We prove weak convergence in Hilbert space under minimal assumptions. In particular, various efficient preconditioners are introduced in th...
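As background for the splitting approach above, here is a minimal sketch of the classic (unpreconditioned) Douglas-Rachford iteration for min f(x) + g(x), applied to the toy problem f(x) = |x|, g(x) = (1/2)(x − 3)^2. This is an illustration of the general scheme only, not the preconditioned saddle-point variant described in that paper.

```python
# Classic Douglas-Rachford splitting for min |x| + (1/2)(x - 3)^2 (a toy sketch).
# The minimizer satisfies 0 in sign(x) + (x - 3), i.e. x* = 2.

def soft_threshold(v, t):
    # prox of t*|x|: shrink v toward zero by t
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

t = 1.0    # proximal stepsize (arbitrary positive constant)
z = 0.0    # Douglas-Rachford auxiliary variable
x = 0.0

for _ in range(200):
    x = soft_threshold(z, t)              # prox of t*f at z
    y = ((2.0 * x - z) + 3.0 * t) / (1.0 + t)  # prox of t*g at reflected point 2x - z
    z = z + y - x                         # averaging update on the auxiliary variable

print(round(x, 3))  # converges to the minimizer 2.0
```

A fixed point z* = 3 recovers x* = soft_threshold(3, 1) = 2, matching the hand computation above; the preconditioned variant in the cited paper replaces exact proximal/linear solves with approximate ones.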



Journal

Journal title: Mathematics of Operations Research

Year: 2022

ISSN: 0364-765X, 1526-5471

DOI: https://doi.org/10.1287/moor.2021.1175